13 research outputs found

    Activation in Right Dorsolateral Prefrontal Cortex Underlies Stuttering Anticipation

    People who stutter learn to anticipate many of their overt stuttering events. Despite the critical role of anticipation, particularly how responses to anticipation shape stuttering behaviors, the neural bases of anticipation are unknown. We used a novel approach to identify anticipated and unanticipated words in 22 adult stutterers; these words were then produced in a delayed-response task while hemodynamic activity was measured using functional near-infrared spectroscopy (fNIRS). Twenty-two control participants were included such that each individualized set of anticipated/unanticipated words was produced by one stutterer and one control. We conducted an analysis of the right dorsolateral prefrontal cortex (R-DLPFC) based on converging lines of evidence from the stuttering and cognitive control literatures. We also assessed connectivity between the R-DLPFC and the right supramarginal gyrus (R-SMG), two key nodes of the frontoparietal network (FPN), to assess the role of cognitive control, particularly error-likelihood monitoring, in stuttering anticipation. All analyses focused on the five-second anticipation phase preceding the go signal to produce speech. Results indicate that anticipated words are associated with elevated activation in the R-DLPFC, and that, compared to non-stutterers, stutterers exhibit greater activity in the R-DLPFC irrespective of anticipation. Further, anticipated words are associated with reduced connectivity between the R-DLPFC and R-SMG. These findings highlight the potential roles of the R-DLPFC and the greater FPN as a neural substrate of stuttering anticipation. The results also support previous accounts of error-likelihood monitoring and action-stopping in stuttering anticipation. Overall, this work offers numerous directions for future research, with clinical implications for targeted neuromodulation.

    Neural correlates of eye contact and social function in autism spectrum disorder

    Reluctance to make eye contact during natural interactions is a central diagnostic criterion for autism spectrum disorder (ASD). However, the underlying neural correlates of eye contact in ASD are unknown, and diagnostic biomarkers are active areas of investigation. Here, neuroimaging, eye-tracking, and pupillometry data were acquired simultaneously using two-person functional near-infrared spectroscopy (fNIRS) during live "in-person" eye-to-eye contact and eye gaze at a video face for typically developed (TD) participants and participants with ASD, to identify the neural correlates of live eye-to-eye contact in both groups. Comparisons between ASD and TD showed decreased right dorsal-parietal activity and increased right ventral temporal-parietal activity for ASD during live eye-to-eye contact (p≤0.05, FDR-corrected), as well as reduced cross-brain coherence, consistent with atypical neural systems for live eye contact. Hypoactivity of right dorsal-parietal regions during eye contact in ASD was further associated with gold-standard measures of social performance by the correlation of neural responses with individual measures of the ADOS-2, Autism Diagnostic Observation Schedule, 2nd Edition (r = -0.76, -0.92, and -0.77), and the SRS-2, Social Responsiveness Scale, Second Edition (r = -0.58). The findings indicate that as categorized social ability decreases, neural responses to real eye contact in the right dorsal-parietal region also decrease, consistent with a neural correlate for social characteristics in ASD.

    People can understand descriptions of motion without activating visual motion brain regions

    What is the relationship between our perceptual and linguistic neural representations of the same event? We approached this question by asking whether visual perception of motion and understanding linguistic depictions of motion rely on the same neural architecture. The same group of participants took part in two language tasks and one visual task. In task 1, participants made semantic similarity judgments with high-motion (e.g., “to bounce”) and low-motion (e.g., “to look”) words. In task 2, participants made plausibility judgments for passages describing movement (“A centaur hurled a spear … ”) or cognitive events (“A gentleman loved cheese …”). Task 3 was a visual motion localizer in which participants viewed animations of point-light walkers, randomly moving dots, and stationary dots changing in luminance. Based on the visual motion localizer, we identified classic visual motion areas of the temporal (MT/MST and STS) and parietal cortex (inferior and superior parietal lobules). We find that these visual cortical areas are largely distinct from neural responses to linguistic depictions of motion. Motion words did not activate any part of the visual motion system. Motion passages produced a small response in the right superior parietal lobule, but in none of the temporal motion regions. These results suggest that (1) compared to words, rich language stimuli such as passages are more likely to evoke mental imagery and more likely to affect perceptual circuits, and (2) effects of language on the visual system are more likely in secondary perceptual areas than in early sensory areas. We conclude that language and visual perception constitute distinct but interacting systems.

    A sensitive period for language in the visual cortex: Distinct patterns of plasticity in congenitally versus late blind adults

    Recent evidence suggests that blindness enables visual circuits to contribute to language processing. We examined whether this dramatic functional plasticity has a sensitive period. BOLD fMRI signal was measured in congenitally blind, late blind (blindness onset 9 years old or later), and sighted participants while they performed a sentence comprehension task. In a control condition, participants listened to backwards speech and made match/non-match-to-sample judgments. In both congenitally and late blind participants, BOLD signal increased in bilateral foveal-pericalcarine cortex during response preparation, irrespective of whether the stimulus was a sentence or backwards speech. However, left occipital areas (pericalcarine, extrastriate, fusiform, and lateral) responded more to sentences than backwards speech only in congenitally blind people. We conclude that age of blindness onset constrains the non-visual functions of occipital cortex: while plasticity is present in both congenitally and late blind individuals, recruitment of visual circuits for language depends on blindness during childhood. Funding: David & Lucile Packard Foundation.

    Contrast results from channel-wise analysis (deOxyHb signals).


    Overlap of neurosynth right TPJ and gesture > color activity.

    Red area represents the left temporal-parietal region of activity from the Gesture > Color contrast, p < 0.001. Blue area shows the forward inference map of the rTPJ from the Neurosynth (http://neurosynth.org) meta-analysis of 92 studies. The black dotted line surrounds the area of overlap.

    Contrast effects: deOxyHb signals, n = 31.

    A) Activated clusters indicate the domain-general results of the Incongruent > Congruent contrast (p < 0.005), with activity present in the right DLPFC. B) Activated clusters indicate the domain-specific results of the Gesture > Color contrast (p < 0.001), with activity present in the right STG and left DLPFC. Black circles indicate the channel number and location of the significant channels (p < 0.05) from the channel-wise analysis.

    Neural correlates of conflict between gestures and words: A domain-specific role for a temporal-parietal complex

    The interpretation of social cues is a fundamental function of human social behavior, and resolution of inconsistencies between spoken and gestural cues plays an important role in successful interactions. To gain insight into these underlying neural processes, we compared neural responses in a traditional color/word conflict task and a gesture/word conflict task to test hypotheses of domain-general and domain-specific conflict resolution. In the gesture task, recorded spoken words (“yes” and “no”) were presented simultaneously with video recordings of actors performing one of the following affirmative or negative gestures: thumbs up, thumbs down, head nodding (up and down), or head shaking (side-to-side), thereby generating congruent and incongruent communication stimuli between gesture and words. Participants identified the communicative intent of the gestures as either positive or negative. In the color task, participants were presented the words “red” and “green” in either red or green font and were asked to identify the color of the letters. We observed a classic “Stroop” behavioral interference effect, with participants showing increased response time for incongruent trials relative to congruent ones for both the gesture and color tasks. Hemodynamic signals acquired using functional near-infrared spectroscopy (fNIRS) were increased in the right dorsolateral prefrontal cortex (DLPFC) for incongruent trials relative to congruent trials for both tasks, consistent with a common, domain-general mechanism for detecting conflict. However, activity in the left DLPFC and frontal eye fields and the right temporal-parietal junction (TPJ), superior temporal gyrus (STG), supramarginal gyrus (SMG), and primary and auditory association cortices was greater for the gesture task than the color task. Thus, in addition to domain-general conflict processing mechanisms, as suggested by common engagement of the right DLPFC, socially specialized neural modules localized to the left DLPFC and right TPJ, including adjacent homologous receptive language areas, were engaged when processing conflicting communications. These findings contribute to an emerging view of specialization within the TPJ and adjacent areas for interpretation of social cues and indicate a role for the region in processing social conflict.

    Gesture stroop stimuli and paradigm.

    A) Task design: Subjects indicated the meaning of the gesture as either positive or negative. Images represent video stills of four types of gesture: head nodding (up and down), head shaking (side-to-side), thumbs up, and thumbs down. Spoken words are superimposed on the video stills in each condition. Rows indicate the body part used in the gesture, i.e., head or hand. Columns indicate congruent and incongruent conditions, where gestures are congruent or incongruent with the spoken words. B) Block design: a 15 s task block alternates with a 15 s rest block; 4 trials per block with an ISI of 3.75 s. Each block consisted predominantly of either congruent (C) or incongruent (I) trials and contained one randomly positioned oddball trial.
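
    The block design above is fully specified by its timing parameters (15 s task, 15 s rest, 4 trials per block, 3.75 s ISI, one oddball per block). As a minimal sketch of how such a schedule could be generated, assuming an illustrative block count and random seed that are not stated in the paper:

    ```python
    import random

    def build_block_schedule(n_blocks=4, block_s=15.0, rest_s=15.0,
                             trials_per_block=4, isi_s=3.75, seed=0):
        """Sketch of an alternating task/rest block design like the one described.

        Each task block holds mostly congruent ('C') or incongruent ('I') trials
        plus one randomly positioned oddball of the other type. Block count,
        majority-condition ordering, and seed are assumptions for illustration.
        """
        rng = random.Random(seed)
        schedule = []
        t = 0.0
        for b in range(n_blocks):
            majority = 'C' if b % 2 == 0 else 'I'
            oddball = 'I' if majority == 'C' else 'C'
            trials = [majority] * trials_per_block
            # One randomly positioned oddball trial per block.
            trials[rng.randrange(trials_per_block)] = oddball
            for i, cond in enumerate(trials):
                schedule.append({'onset': t + i * isi_s, 'condition': cond})
            t += block_s + rest_s  # 15 s task block followed by 15 s rest
        return schedule
    ```

    Note that 4 trials at a 3.75 s ISI exactly fill the 15 s task block, so the next block's first trial starts right after the rest period ends.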